9 Glossary

Symbols:

⇒  implies
⇔  is equivalent to
x̄  mean of a set of values of x
ε  error
ε̂  Greenhouse-Geisser correction (see p. 25)
ε̃  Huynh-Feldt correction (see p. 25)
µ  mean
ρ  population correlation
r  sample correlation
rxy or rx.y  correlation between x and y
ry.a,b,c  multiple correlation between y and (a, b, c)
ry(x.z)  semipartial correlation between y and x, having partialled out z (see p. 100)
ryx.z  partial correlation between y and x, having partialled out z (see p. 100)
Σ  sum of (see p. 09)
σX  population standard deviation of X
sX  sample standard deviation of X
σ²X  population variance of X
s²X  sample variance of X

Additive model. In within-subjects ANOVA, a structural model that assumes the effects of within-subjects treatments are the same for all subjects.

ANCOVA. Analysis of covariance: an ANOVA that uses a covariate as a predictor variable.

ANOVA. Analysis of variance. See p. 8 for an explanation of how it works.

A priori tests. Tests planned in advance of obtaining the data; compare post hoc tests.

Balanced ANOVA. An ANOVA is said to be balanced when all the cells have equal n, when there are no missing cells, and, if there is a nested design, when the nesting is balanced so that equal numbers of levels of the nested factor appear in the levels of the factor(s) they are nested within. This greatly simplifies the computation.

Between-subjects (factor or covariate). If each subject is tested at only a single level of an independent variable, the independent variable is called a between-subjects factor. Compare within-subjects.

Carryover effects. See within-subjects.

Categorical predictor variable. A variable measured on a nominal scale, whose categories identify class or group membership, used to predict one or more dependent variables. Often called a factor.

Continuous predictor variable. A continuous variable used to predict one or more dependent variables. Often called a covariate.

Covariance matrix. If you have three variables x, y, z, the covariance matrix, denoted Σ, is

    Σ =  [ σ²x    covxy  covxz ]
         [ covxy  σ²y    covyz ]
         [ covxz  covyz  σ²z   ]

where covxy is the covariance of x and y (= ρxy σx σy, where ρxy is the correlation between x and y and σ²x is the variance of x). Obviously, covxx = σ²x. The covariance matrix is sometimes checked for compound symmetry, which is a fancy way of saying that σ²x = σ²y = σ²z (all numbers on the leading diagonal the same as each other) and covxy = covyz = covxz (all numbers not on the leading diagonal the same as each other). If there is compound symmetry, there is also sphericity, which is what's important when you're running ANOVAs with within-subjects factors. On the other hand, you can have sphericity without having compound symmetry; see p. 25.

Conservative. Apt to give p values that are too large.

Contrast. See linear contrast.

Covariate. A continuous variable (one that can take any value) used as a predictor variable.

Degrees of freedom (df). Estimates of parameters can be based upon different amounts of information. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (d.f. or df). Or: the number of observations free to vary. For example, if you pick three numbers at random, you have 3 df, but once you calculate the sample mean x̄ you have only 2 df left, because you can alter only two numbers freely; the third is constrained by the fact that you have fixed x̄. Or: the number of measurements exceeding the amount absolutely necessary to measure the object (or parameter) in question. To measure the length of a rod requires 1 measurement; if 10 measurements are taken, then the set of 10 measurements has 9 df. In general, the df of an estimate is the number of independent scores that go into the estimate minus the number of parameters estimated from those scores as intermediate steps. For example, if the population variance σ² is estimated (by the sample variance s²) from a random sample of n independent scores, then the number of degrees of freedom is equal to the number of independent scores (n) minus the number of parameters estimated as intermediate steps (one, as µ is estimated by x̄), and is therefore n − 1.

Dependent variable. The variable you measure, but do not control.
ANOVA is about predicting the value of a single dependent variable using one or more predictor variables.

Design matrix. The matrix in a general linear model that specifies the experimental design: how different factors and covariates contribute to particular values of the dependent variable(s).

Doubly-nested design. One in which there are two levels of nesting (see nested design). Some are described on p.

Error term. To test the effect of a predictor variable of interest with an ANOVA, the variability attributable to it (MSvariable) is compared to the variability attributed to an appropriate error term (MSerror), which measures an appropriate error variability. The error term is valid if the expected mean square for the variable, E(MSvariable), differs from E(MSerror) only in a way attributable solely to the variable of interest.

Error variability (or error variance, σ²e). Variability among observations that cannot be attributed to the effects of the independent variable(s). May include measurement error, but also the effects of many irrelevant variables that are not measured or considered. It may be possible to reduce the error variability by accounting for some of them, and designing our experiment accordingly. For example, if we want to study the effects of two methods of teaching reading on children's reading performance, rather than randomly assigning all our students to teaching method 1 or teaching method 2, we could split our children into groups with low/medium/high intelligence, and randomly allocate students from each level of intelligence to one of our two teaching methods. If intelligence accounts for some of the variability in reading ability, accounting for it in this way will reduce our error variability. Within-subjects designs take this principle further (but are susceptible to carryover effects).

Expected mean square (EMS). The value a mean square (MS) would be expected to have if the null hypothesis were true.

F ratio. The ratio of two variances. In ANOVA, the ratio of the mean square (MS) for a predictor variable to the MS of the corresponding error term.
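The covariance matrix described above, and the n − 1 degrees of freedom used by the sample variance, can both be seen numerically. A minimal Python sketch (the numbers are invented purely for illustration):

```python
import numpy as np

# Invented scores for three variables (x, y, z) measured on six subjects.
data = np.array([[4.0, 2.1, 7.2],
                 [5.5, 3.0, 6.8],
                 [3.9, 2.4, 7.9],
                 [6.1, 3.8, 6.0],
                 [5.0, 2.9, 7.1],
                 [4.6, 2.6, 7.5]])

# Sample covariance matrix (columns are variables; np.cov divides by n - 1,
# i.e. the degrees of freedom, as in the sample variance).
S = np.cov(data, rowvar=False)

# The leading diagonal holds the variances: cov_xx = var(x).
assert np.isclose(S[0, 0], np.var(data[:, 0], ddof=1))

# Compound symmetry: all diagonal entries equal, all off-diagonal entries equal.
diag = np.diag(S)
off_diag = S[np.triu_indices_from(S, k=1)]
compound_symmetric = np.allclose(diag, diag[0]) and np.allclose(off_diag, off_diag[0])
print(S.shape, compound_symmetric)  # → (3, 3) False
```

With these made-up data the variances differ across variables, so compound symmetry (and hence this quick check) fails.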
Factor. A discrete variable (one that can take only certain values) used as a predictor variable; a categorical predictor. Factors have a certain number of levels.

Factorial ANOVA. An ANOVA using factors as predictor variables. The term is often used to refer to ANOVAs involving more than one factor (compare one-way ANOVA). Factorial designs range from the completely randomized design (subjects are randomly assigned to, and serve in only one of, several different treatment conditions, i.e. a completely between-subjects design), via mixed designs (both between-subjects and within-subjects factors), to completely within-subjects designs, in which each subject serves in every condition.

Fixed factor. A factor that contains all the levels we are interested in (e.g. the factor Sex has the levels male and female). Compare random factor and see p. 31.

Gaussian distribution. Normal distribution.

General linear model. A general way of predicting one or more dependent variables from one or more predictor variables, be they categorical or continuous. Subsumes regression, multiple regression, ANOVA, ANCOVA, MANOVA, MANCOVA, and so on.

Greenhouse-Geisser correction/epsilon. If the sphericity assumption is violated in an ANOVA involving within-subjects factors, you can correct the df for any term involving the WS factor (and the df of the corresponding error term) by multiplying both by this correction factor. Often written ε̂, where 0 < ε̂ ≤ 1. Originally from Greenhouse & Geisser (1959).

Heterogeneity of variance. Opposite of homogeneity of variance: when variances for different treatments are not the same.

Hierarchical design. One in which one variable is nested within a second, which is itself nested within a third. A doubly-nested design (such as the split-split plot design) is the simplest form of hierarchical design. They're complex.

Homogeneity of variance. When a set of variances are all equal.
If you perform an ANOVA with a factor with a levels, the homogeneity of variance assumption is that σ²1 = σ²2 = … = σ²a = σ²e, where σ²e is the error variance.

Huynh-Feldt correction/epsilon. If the sphericity assumption is violated in an ANOVA involving within-subjects factors, you can correct the df for any term involving the WS factor (and the df of the corresponding error term) by multiplying both by this correction factor. Often written ε̃, where 0 < ε̃ ≤ 1. Originally from Huynh & Feldt (1970).

Independent variable. The variables thought to be influencing the dependent variable(s). In experiments, independent variables are manipulated. In correlational studies, independent variables are observed. (The advantage of the experiment is the ease of making causal inferences.)

Interaction. There is an interaction between factors A and B if the effect of factor A depends on the level of factor B, or vice versa. For example, if your dependent variable is engine speed, and your factors are presence of spark plugs (Y/N) (A) and presence of petrol (Y/N) (B), you will find an interaction such that factor A only influences engine speed at the "petrol present" level of B; similarly, factor B only influences engine speed at the "spark plugs present" level of A. This is a binary example, but interactions need not be. Compare main effect, simple effect.

Intercept. The contribution of the grand mean to the observations. See p. 65. The F test on the intercept term (MSintercept/MSerror) tests the null hypothesis that the grand mean is zero.

Level (of a factor). One of the values that a discrete predictor variable (factor) can take. For example, the factor Weekday might have five levels: Monday, Tuesday, Wednesday, Thursday, Friday. We might write the factor as Weekday5 in descriptions of ANOVA models (as in Tedium = Drowsiness × Weekday5 × S), or write the levels themselves as Weekday1 … Weekday5.

Levene's test (for heterogeneity of variance). Originally from Levene (1960). Tests the assumption of homogeneity of variance. If Levene's test produces a significant result, the assumption of homogeneity of variance cannot be made (this is generally a Bad Thing, and suggests that you might need to transform your data to improve the situation; see p. 34).
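Levene's test can be run directly with scipy's `stats.levene` function; a sketch with made-up groups (note that scipy's default, `center='median'`, is the Brown-Forsythe variant, so `center='mean'` is passed here for Levene's original test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Three made-up treatment groups; the third is deliberately more variable.
g1 = rng.normal(loc=10.0, scale=1.0, size=20)
g2 = rng.normal(loc=10.0, scale=1.0, size=20)
g3 = rng.normal(loc=10.0, scale=3.0, size=20)

# center='mean' gives Levene's original (1960) test.
W, p = stats.levene(g1, g2, g3, center='mean')
print(W > 0, 0.0 <= p <= 1.0)  # → True True
```

A significant p here would indicate heterogeneity of variance, i.e. that the homogeneity assumption cannot be made.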
Liberal. Apt to give p values that are too small.

Linear contrasts. Comparisons between linear combinations of different groups, used to test specific hypotheses. See p. 75.

Linear regression. Predicting Y from X using the equation of a straight line: Ŷ = bX + a. May be performed with regression ANOVA.

Logistic regression. See Howell (1997). A logistic function is a sigmoid. If your dependent variable is dichotomous (categorical) but ordered ("flight on time" versus "flight late", for example) and you wish to predict it (for example, by pilot experience), a logistic function is often better than a straight line. It reflects the fact that the dichotomy imposes a cutoff on some underlying continuous variable (e.g. once your "flight delay in seconds" continuous variable reaches a certain level, you classify the flight as late on the dichotomous variable). Dichotomous variables can be converted into variables suitable for linear regression by converting the probability of falling into one category, P(flight late), into the odds of falling into that category, using

    odds = P(A) / (1 − P(A)),

and then into the log odds, using the natural (base e) logarithm:

    loge(odds) = ln(odds).

The probability is therefore a logistic function of the log odds,

    probability = e^ln(odds) / (1 + e^ln(odds)),

so performing a linear regression on the log odds is equivalent to performing a logistic regression on probability. This is pretty much what logistic regression does, give or take some procedural wrinkles. Odds ratios (likelihood ratios), the odds for one group divided by the odds for another group, emerge from logistic regression in the way that slope estimates emerge from linear regression, but the statistical tests involved are different. Logistic regression is a computationally iterative task; there's no simple formula (the computer works out the model that best fits the data iteratively).

Main effect. A main effect is an effect of a factor regardless of the other factor(s). Compare simple effect; interaction.

MANCOVA. Multivariate analysis of covariance; see MANOVA and ANCOVA.

MANOVA. Multivariate ANOVA: ANOVA that deals with multiple dependent variables simultaneously. Not covered in this document. For example, if you think that your treatment has a bigger effect on dependent variable Y2 than on variable Y1, how can you see if that is the case? Certainly not by making categorical decisions based on p values ("significant effect on Y1, not significant effect on Y2": this wouldn't mean that the effects on Y1 and Y2 were significantly different!). Instead, you should enter Y1 and Y2 into a MANOVA.

Mauchly's test (for sphericity of the covariance matrix). Originally from Mauchly (1940). See sphericity, covariance matrix, and p. 25.

Mean square (MS). A sum of squares (SS) divided by the corresponding number of degrees of freedom (df), or number of independent observations upon which your SS was based. This gives you the mean squared deviation from the mean, or the mean square. Effectively, a variance.

Mixed model. An ANOVA model that includes both between-subjects and within-subjects predictor variables. Alternatively, one that includes both fixed and random factors. The two uses are often equivalent in practice, since Subjects is usually a random factor.

Multiple regression. Predicting a dependent variable on the basis of two or more continuous variables. Equivalent to ANOVA with two or more covariates.

Nested design. An ANOVA design in which variability due to one factor is nested within variability due to another factor. For example, if one were to administer four different tests to four school classes (i.e. a between-groups factor with four levels), and two of those four classes are in school A whereas the other two are in school B, then the levels of the first factor (four different tests) would be nested in the second factor (two different schools). A very common example is a design with one between-subjects factor and one within-subjects factor, written A × (U × S); variation due to subjects is nested within variation due to A (or, for shorthand, S is nested within A), because each subject is only tested at one level of the between-subjects factor(s). We might write this S/A ("S is nested within A"); SPSS uses the alternative notation S(A). See also doubly-nested design.
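The odds/log-odds algebra in the logistic regression entry above can be demonstrated in a few lines of plain Python (the probability 0.8 is just an example value):

```python
import math

def logit(p):
    """Log odds of a probability: ln(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def logistic(x):
    """Inverse of logit: probability = e^x / (1 + e^x)."""
    return math.exp(x) / (1.0 + math.exp(x))

p_late = 0.8                      # example: P(flight late) = 0.8
odds = p_late / (1.0 - p_late)    # odds of being late: 4 to 1
log_odds = math.log(odds)         # the quantity linear regression works on
recovered = logistic(log_odds)    # back to the original probability

print(round(odds, 6), round(recovered, 6))  # → 4.0 0.8
```

A linear regression on the log odds is then equivalent to a logistic regression on the probability, as the entry states.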
Nonadditive model. In within-subjects ANOVA, a structural model that allows that the effects of within-subjects treatments can differ across subjects.

Null hypothesis. For a general discussion of null hypotheses, see the handouts. In a one-way ANOVA, when you test the main effect of a factor A with a levels, your null hypothesis is that µ1 = µ2 = … = µa. If you reject this null hypothesis (if your F ratio is large and significant), you conclude that the effects of all a levels of A were not the same. But if there are > 2 levels of A, you do not yet know which levels differed from each other; see post hoc tests.

One-way ANOVA. ANOVA with a single between-subjects factor.

Order effects. See within-subjects.

Overparameterized model. A way of specifying a general linear model design matrix in which a separate predictor variable is created for each group identified by a factor. For example, to code Sex, one variable would be created in which males score 1 and females score 0, and another variable would be created in which males score 0 and females score 1. These two variables contain mutually redundant information: there are more predictor variables than are necessary to determine the relationship of a set of predictors to a set of dependent variables. Compare sigma-restricted model.

Planned contrasts. Linear contrasts run as a priori tests.

Polynomial ANCOVA. An ANCOVA in which a nonlinear term is used as a predictor variable (such as x², x³, rather than the usual x). See Myers & Well (1995, p. 460).

Post hoc tests. Statistical tests you run after an ANOVA to examine the nature of any main effects or interactions you found. For example, if you had an ANOVA with a single between-subjects factor with three levels, sham/core/shell, and you found a main effect of this factor, was this due to a difference between sham and core subjects? Sham and shell? Shell and core? Are all of them different? There are many post hoc tests available for this sort of purpose. However, there are statistical pitfalls if you run many post hoc tests: you may make Type I errors (see the handouts) simply because you are running lots of tests. Post hoc tests may include further ANOVAs of subsets of your original data. For example, after finding a significant Group × Difficulty interaction, you might ask whether there was a simple effect of Group at the "easy" level of the Difficulty factor, and whether there was a simple effect of Group at the "difficult" level of the Difficulty factor (see pp. 20, 39).

Power of an ANOVA. Complex to work out. But things that increase the expected F ratio for a particular term (if the null hypothesis is false) increase power, and

    F = MSpredictor / MSerror = (SSpredictor / dfpredictor) / (SSerror / dferror).

Bigger samples contribute to a larger df for your error term; this therefore decreases MSerror and increases the expected F if the null hypothesis is false, and this therefore increases your power. The larger the ratio of E(MStreatment) to E(MSerror), the larger your power. Sometimes two different structural models give you different EMS ratios; you can use this principle to find out which is more powerful for detecting a particular effect (see p. 73). For references to methods of calculating power directly, see p. 10.

Predictor variable. Factors and covariates: things that you use to predict your dependent variable.

Pseudoreplication. What you do when you analyse correlated data without accounting for the correlation. A Bad Thing: entirely Wrong. For example, you could take 3 subjects, measure each 10 times, and pretend that you had 30 independent measurements. No, no, no, no, no. Account for the correlation in your analysis (in this case, by introducing a Subject factor and using an appropriate ANOVA design with a within-subjects factor).

Random factor. A factor whose levels we have sampled at random from many possible alternatives. For example, Subjects is a random factor: we pick our subjects out of a large potential pool, and if we repeat the experiment, we may use different subjects. Compare fixed factor and see p. 31.
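The F-ratio identity in the power entry above can be verified on a toy one-way design (the scores are invented):

```python
import numpy as np

# Toy one-way ANOVA: a = 3 groups of n = 5 invented scores.
groups = [np.array([4.0, 5, 6, 5, 5]),
          np.array([6.0, 7, 8, 7, 7]),
          np.array([9.0, 9, 10, 10, 9])]
scores = np.concatenate(groups)
grand_mean = scores.mean()

# Between-groups (predictor) and within-groups (error) sums of squares.
ss_pred = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_err = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_pred = len(groups) - 1           # a - 1 = 2
df_err = len(scores) - len(groups)  # N - a = 12

# F = MS_predictor / MS_error = (SS_pred / df_pred) / (SS_err / df_err)
F = (ss_pred / df_pred) / (ss_err / df_err)
print(round(F, 6))  # → 56.0
```

A large F like this one says that the variability between group means is large relative to the error variability, so the null hypothesis of equal group means looks implausible.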
Regression ANOVA. Performing linear regression using ANOVA. A simple linear regression is an ANOVA with a single covariate (i.e. an ANCOVA) and no other factors.

Repeated measures. Same as within-subjects. "Repeated measures" is the more general term: within-subjects designs involve repeated measurements of the same subject, but things other than subjects can also be measured repeatedly. In general, within-subjects/repeated-measures analysis is to do with accounting for relatedness between sets of observations above that which you'd expect by chance. Repeated measurement of a subject will tend to generate data that are more closely related (by virtue of coming from the same subject) than data from different subjects.

Robust. A test that gives correct p values even when its assumptions are violated to some degree ("this test is fairly robust to violation of the normality assumption").

Sequence effects. See within-subjects.

Sigma-restricted model. A way of specifying a general linear model in which a categorical variable with k possible levels is coded in a design matrix with k − 1 variables. The values used to code membership of particular groups sum to zero. For example, to code Sex, one variable would be created in which males score +1 and females −1. Compare overparameterized model.

Simple effect. An effect of one factor considered at only one level of another factor. A simple effect of A at one level of factor B is written "A at B" or A/B. See main effect, interaction, and pp. 20, 39.

Source of variance (SV). Something contributing to variation in a dependent variable. Includes predictor variables and error variability.

Sphericity assumption. An important assumption applicable to within-subjects (repeated measures) ANOVA. Sphericity is the assumption of homogeneity of variance of difference scores. Suppose we test 5 subjects at three levels of A. We can therefore calculate three sets of difference scores, (A3 − A2), (A2 − A1), and (A3 − A1), for each subject. Sphericity is the assumption that the variances of these difference scores are the same. See p. 25.

Standard deviation. The square root of the variance.

Structural model. An equation giving the value of the dependent variable in terms of sources of variability, including predictor variables and error variability.

Sum of squares (SS). In full, the sum of the squared deviations from the mean. See variance. Sums of squares are used in preference to actual variances in ANOVA because sample sums of squares are additive (you can add them up and they still mean something), whereas sample variances are not additive unless they're based on the same number of degrees of freedom.

t test, one-sample. Equivalent to testing MSintercept/MSerror with an ANOVA with no other factors (odd as that sounds). F1,k = t²k and tk = √F1,k. See intercept.

t test, two-sample, paired. Equivalent to ANOVA with one within-subjects factor with two levels. F1,k = t²k and tk = √F1,k.

t test, two-sample, unpaired. Equivalent to ANOVA with one between-subjects factor with two levels. F1,k = t²k and tk = √F1,k.

Variance. To calculate the variance of a set of observations, take each observation and subtract the mean from it. This gives you a set of deviations from the mean. Square them and add them up. At this stage you have the sum of the squared deviations from the mean, also known as the sum of squares (SS). Divide by the number of independent observations you have (n for the population variance; n − 1 for the sample variance; or, in general, the number of degrees of freedom) to get the variance. See the Background Knowledge handouts.

Within-subjects (factor or covariate). See also repeated measures. If a score is obtained for every subject at each level of an independent variable, the independent variable is called a within-subjects factor. See also between-subjects. The advantage of a within-subjects design is that the different treatment conditions are automatically matched on many irrelevant variables: all those that are relatively unchanging characteristics of the subject (e.g. intelligence, age). However, the design requires that each subject is tested several times, under different treatment conditions. Care must be taken to avoid order, sequence, or carryover effects, such as the subject getting better through practice, worse through fatigue, drug hangovers, and so on. If the effect of a treatment is permanent, it is not possible to use a within-subjects design. You could not, for example, use a within-subjects design to study the effects of parachutes (versus no parachute) on mortality rates after falling out of a plane.
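To make the sphericity-related entries concrete, here is a minimal numpy sketch of the Greenhouse-Geisser epsilon in one standard formulation (the covariance matrix is invented; under compound symmetry, and hence sphericity, ε̂ = 1):

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon from a k x k covariance matrix of repeated
    measures: eps = tr(A)^2 / ((k - 1) * tr(A @ A)), where A = P S P and P
    is the centring matrix. eps = 1 under sphericity; the lower bound is
    1 / (k - 1)."""
    k = S.shape[0]
    P = np.eye(k) - np.ones((k, k)) / k   # centring matrix
    A = P @ S @ P
    return np.trace(A) ** 2 / ((k - 1) * np.trace(A @ A))

# Compound-symmetric covariance matrix: sphericity holds, so epsilon = 1.
S_cs = np.array([[2.0, 0.5, 0.5],
                 [0.5, 2.0, 0.5],
                 [0.5, 0.5, 2.0]])
print(round(gg_epsilon(S_cs), 6))  # → 1.0
```

The corrected test then multiplies both the effect df and the error df by ε̂, as in the Greenhouse-Geisser entry above.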
10 Further reading

A very good statistics textbook for psychology is Howell (1997). Abelson (1995) examines statistics as a technique of argument and is very clear on the logical principles and some of the philosophy of statistics. Keppel (1991) is a fairly hefty tome on ANOVA techniques. Winer (1991) is another monster reference book. Neither is for the faint-hearted. Myers & Well (1995) is another excellent one: less fluffy than Howell (1997), but deals with the issues head on.

There is also an excellent series of Statistics Notes published by the British Medical Journal, mostly by Bland and Altman; a list and the articles themselves are available online. This series includes the following:

- The problem of the unit of analysis (Altman & Bland, 1997).
- Correlation and regression when repeated measurements are taken, and the problem of pseudoreplication (Bland & Altman, 1994a).
- The approach one should take to measuring correlation within subjects (Bland & Altman, 1995a) and correlation between subjects (Bland & Altman, 1995b).
- Why correlation is utterly inappropriate for assessing whether two ways of measuring something agree (Bland & Altman, 1986).
- Generalization and extrapolation (Altman & Bland, 1998).
- Why to randomize (Altman & Bland, 1999b), how to randomize (Altman & Bland, 1999a), and how to match subjects to different experimental groups (Bland & Altman, 1994b).
- Blinding (Day & Altman, 2000; Altman & Schulz, 2001).
- "Absence of evidence is not evidence of absence": about power (Altman & Bland, 1995).
- Multiple significance tests: the problem (Bland & Altman, 1995c).
- Regression to the mean (Bland & Altman, 1994e; Bland & Altman, 1994d).
- One-tailed and two-tailed significance tests (Bland & Altman, 1994c).
- Transforming data (Bland & Altman, 1996b) and how to calculate confidence intervals with transformed data (Bland & Altman, 1996c; Bland & Altman, 1996a).
- ANOVA, briefly (Altman & Bland, 1996), and the analysis of interaction effects (Altman & Matthews, 1996; Matthews & Altman, 1996a; Matthews & Altman, 1996b).
- Comparing estimates derived from separate analyses (Altman & Bland, 2003).
- Dealing with differences in baseline by ANCOVA (Vickers & Altman, 2001).

Finally, there's an excellent on-line textbook (StatSoft, 2002).
11 Bibliography

Abelson, R. P. (1995). Statistics As Principled Argument. Lawrence Erlbaum, Hillsdale, New Jersey.
Altman, D. G. & Bland, J. M. (1995). Absence of evidence is not evidence of absence. British Medical Journal 311: 485.
Altman, D. G. & Bland, J. M. (1996). Comparing several groups using analysis of variance. British Medical Journal 312: 1472-1473.
Altman, D. G. & Bland, J. M. (1997). Statistics notes. Units of analysis. British Medical Journal 314: 1874.
Altman, D. G. & Bland, J. M. (1998). Generalisation and extrapolation. British Medical Journal 317: 409-410.
Altman, D. G. & Bland, J. M. (1999a). How to randomise. British Medical Journal 319: 703-704.
Altman, D. G. & Bland, J. M. (1999b). Statistics notes. Treatment allocation in controlled trials: why randomise? British Medical Journal 318: 1209.
Altman, D. G. & Bland, J. M. (2003). Interaction revisited: the difference between two estimates. British Medical Journal 326: 219.
Altman, D. G. & Matthews, J. N. (1996). Statistics notes. Interaction 1: Heterogeneity of effects. British Medical Journal 313: 486.
Altman, D. G. & Schulz, K. F. (2001). Statistics notes: Concealing treatment allocation in randomised trials. British Medical Journal 323: 446-447.
Bland, J. M. & Altman, D. G. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. Lancet i: 307-310.
Bland, J. M. & Altman, D. G. (1994a). Correlation, regression, and repeated data. British Medical Journal 308: 896.
Bland, J. M. & Altman, D. G. (1994b). Matching. British Medical Journal 309: 1128.
Bland, J. M. & Altman, D. G. (1994c). One and two sided tests of significance. British Medical Journal 309: 248.
Bland, J. M. & Altman, D. G. (1994d). Regression towards the mean. British Medical Journal 308: 1499.
Bland, J. M. & Altman, D. G. (1994e). Some examples of regression towards the mean. British Medical Journal 309: 780.
Bland, J. M. & Altman, D. G. (1995a). Calculating correlation coefficients with repeated observations: Part 1--Correlation within subjects. British Medical Journal 310: 446.
Bland, J. M. & Altman, D. G. (1995b). Calculating correlation coefficients with repeated observations: Part 2--Correlation between subjects. British Medical Journal 310: 633.
Bland, J. M. & Altman, D. G. (1995c). Multiple significance tests: the Bonferroni method. British Medical Journal 310: 170.
Bland, J. M. & Altman, D. G. (1996a). Transformations, means, and confidence intervals. British Medical Journal 312: 1079.
Bland, J. M. & Altman, D. G. (1996b). Transforming data. British Medical Journal 312: 770.
Bland, J. M. & Altman, D. G. (1996c). The use of transformation when comparing two means. British Medical Journal 312: 1153.
Box, G. E. P. (1954). Some theorems on quadratic forms applied in the study of analysis of variance problems: II. Effect of inequality of variance and of correlation of errors in the two-way classification. Annals of Mathematical Statistics 25: 484-498.
Boyd, O., Mackay, C. J., Lamb, G., Bland, J. M., Grounds, R. M. & Bennett, E. D. (1993). Comparison of clinical information gained from routine blood-gas analysis and from gastric tonometry for intramural pH. Lancet 341.
Cardinal, R. N., Parkinson, J. A., Djafari Marbini, H., Toner, A. J., Bussey, T. J., Robbins, T. W. & Everitt, B. J. (2003). Role of the anterior cingulate cortex in the control over behaviour by Pavlovian conditioned stimuli in rats. Behavioral Neuroscience 117: 566-587.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Second edition, Lawrence Erlbaum, Hillsdale, New Jersey.
Day, S. J. & Altman, D. G. (2000). Statistics notes: blinding in clinical trials and other studies. British Medical Journal 321: 504.
Field, A. P. (1998). A bluffer's guide to sphericity. Newsletter of the Mathematical, Statistical and Computing Section of the British Psychological Society 6: 13-22.
Frank, H. & Althoen, S. C. (1994). Statistics: Concepts and Applications. Cambridge University Press, Cambridge.
Greenhouse, S. W. & Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika 24: 95-112.
Howell, D. C. (1997). Statistical Methods for Psychology. Fourth edition, Wadsworth, Belmont, California.
Huynh, H. & Feldt, L. S. (1970). Conditions under which mean square ratios in repeated measures designs have exact F-distributions. Journal of the American Statistical Association 65: 1582-1589.
Keppel, G. (1982). Design and analysis: a researcher's handbook. Second edition, Prentice-Hall, Englewood Cliffs, New Jersey.
Keppel, G. (1991). Design and analysis: a researcher's handbook. Third edition, Prentice-Hall, London.
Levene, H. (1960). Robust tests for the equality of variance. In Contributions to Probability and Statistics (Olkin, I., ed.). Stanford University Press, Palo Alto, California.
Lilliefors, H. W. (1967). On the Kolmogorov-Smirnov test for normality with mean and variance unknown. Journal of the American Statistical Association 62: 399-402.
Matthews, J. N. & Altman, D. G. (1996a). Interaction 3: How to examine heterogeneity. British Medical Journal 313: 862.
Matthews, J. N. & Altman, D. G. (1996b). Statistics notes. Interaction 2: Compare effect sizes not P values. British Medical Journal 313: 808.
Mauchly, J. W. (1940). Significance test for sphericity of a normal n-variate distribution. Annals of Mathematical Statistics 11: 204-209.
Myers, J. L. & Well, A. D. (1995). Research Design and Statistical Analysis. Lawrence Erlbaum, Hillsdale, New Jersey.
Prescott, C. E., Kabzems, R. & Zabek, L. M. (1999). Effects of fertilization on decomposition rate of Populus tremuloides foliar litter in a boreal forest. Canadian Journal of Forest Research 29.
Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. (1992). Numerical Recipes in C. Second edition, Cambridge University Press, Cambridge, UK.
Satterthwaite, F. E. (1946). An approximate distribution of estimates of variance components. Biometrics Bulletin 2: 110-114.
Shapiro, S. S. & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika 52: 591-611.
SPSS (2001). SPSS 11.0 Syntax Reference Guide (spssbase.pdf).
StatSoft (2002). Electronic Statistics Textbook. Tulsa, OK.
Tangren, J. (2002). A Field Guide To Experimental Designs. Washington State University, Tree Fruit Research and Extension Center.
Vickers, A. J. & Altman, D. G. (2001). Statistics notes: Analysing controlled trials with baseline and follow up measurements. British Medical Journal 323: 1123-1124.
Winer, B. J. (1971). Statistical principles in experimental design. Second edition, McGraw-Hill, New York.
Winer, B. J., Brown, D. R. & Michels, K. M. (1991). Statistical Principles in Experimental Design. Third edition, McGraw-Hill, New York.